
    Monitoring the Bittorrent Monitors: A Bird’s Eye View

    Abstract. Detecting clients with deviant behavior in the BitTorrent network is a challenging task that has not received the attention it deserves. Typically, this question is seen as not 'politically correct', since it is associated with the controversial issue of detecting copyright protection agencies. However, deviant behavior detection and the associated blacklists might prove crucial for the well-being of BitTorrent. We find that there are other deviant entities in BitTorrent besides monitors. Our goal is to provide some initial heuristics that can be used to automatically detect deviant clients. We analyze the top 600 torrents of The Pirate Bay over 45 days. We show that empirical observation of BitTorrent clients can be used to detect deviant behavior and, consequently, that it is possible to automatically build dynamic blacklists.
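The kind of heuristic the abstract alludes to can be sketched as follows. This is a hypothetical illustration, not the paper's actual heuristics: the `PeerStats` fields and all thresholds are invented for the example; the idea is only that a peer present in many swarms while never exchanging payload, or refusing handshakes, looks like a monitor rather than a normal client.

```python
# Hypothetical sketch of behavior-based deviant-peer detection and
# dynamic blacklist construction. Field names and thresholds are
# illustrative assumptions, not taken from the paper.
from dataclasses import dataclass


@dataclass
class PeerStats:
    ip: str
    torrents_joined: int       # distinct torrents this peer appeared in
    bytes_uploaded: int        # payload actually served to other peers
    handshakes_completed: int  # successful protocol handshakes
    connections_seen: int      # connection attempts observed


def is_deviant(p: PeerStats) -> bool:
    """Flag peers whose behavior deviates from a normal leecher/seeder:
    present in many swarms but never uploading payload, or almost never
    completing handshakes. Thresholds are made up for illustration."""
    in_many_swarms = p.torrents_joined >= 50
    never_uploads = p.bytes_uploaded == 0
    rarely_handshakes = (p.connections_seen > 0 and
                         p.handshakes_completed / p.connections_seen < 0.1)
    return in_many_swarms and (never_uploads or rarely_handshakes)


def build_blacklist(peers: list[PeerStats]) -> set[str]:
    """Collect the IPs of all peers currently flagged as deviant."""
    return {p.ip for p in peers if is_deviant(p)}
```

Re-running `build_blacklist` over a sliding window of observations would yield the dynamic blacklist the abstract mentions.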

    BitTorrent locality and transit traffic reduction: When, why, and at what cost?

    A substantial amount of work has recently gone into localizing BitTorrent traffic within an ISP in order to avoid excessive, and often unnecessary, transit costs. Several architectures and systems have been proposed, and the initial results from specific ISPs and a few torrents have been encouraging. In this work we attempt to deepen and scale our understanding of locality and its potential. First, looking at specific ISPs, we consider tens of thousands of concurrent torrents, and thus capture ISP-wide implications that cannot be appreciated by looking at only a handful of torrents. Second, we go beyond individual case studies and present results for a few thousand ISPs represented in our data set of up to 40K torrents, involving more than 3.9M concurrent peers and more than 20M peers in the course of a day, spread across 11K ASes. Finally, we develop scalable methodologies that allow us to process this huge data set and derive accurate traffic matrices of torrents. Using these methods we obtain the following main findings: i) although there is a large number of very small ISPs without enough resources for localizing traffic, by analyzing the 100 largest ISPs we show that locality policies are expected to significantly reduce transit traffic in these ISPs with respect to the default random overlay construction method; ii) contrary to popular belief, increasing the access speed of an ISP's clients does not necessarily help to localize more traffic; iii) by studying several real ISPs, we show that soft speed-aware locality policies guarantee win-win situations for ISPs and end users. Furthermore, the maximum transit traffic savings that an ISP can achieve without limiting the number of inter-ISP overlay links is bounded by "unlocalizable" torrents with few local clients.
    Restricting the number of inter-ISP links leads to a higher transit traffic reduction, but the QoS of clients downloading "unlocalizable" torrents would be severely harmed. The research leading to these results has been partially funded by the European Union's FP7 Program under the projects eCOUSIN (318398) and TREND (257740), the Spanish Ministry of Economy and Competitiveness under the eeCONTENT project (TEC2011-29688-C02-02), and the Regional Government of Madrid under the MEDIANET Project (S2009/TIC-1468).
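The bound imposed by "unlocalizable" torrents can be illustrated with a toy model. This sketch is not the paper's methodology: it simply assumes each peer downloads the same volume and that the baseline overlay picks neighbors uniformly at random, so a torrent with a single local client contributes no localizable traffic at all.

```python
# Illustrative toy model (not the paper's methodology): how much of one
# torrent's traffic an ISP could keep local under a perfect locality
# policy versus the default random overlay.


def random_overlay_local_fraction(local_peers: int, total_peers: int) -> float:
    """Expected fraction of a local peer's traffic that stays inside the
    ISP when neighbors are chosen uniformly at random from the swarm."""
    if local_peers < 2 or total_peers < 2:
        return 0.0
    return (local_peers - 1) / (total_peers - 1)


def locality_local_fraction(local_peers: int) -> float:
    """Upper bound under a perfect locality policy: all traffic can stay
    local as long as at least two local peers exist. A torrent with a
    single local client is 'unlocalizable'."""
    return 1.0 if local_peers >= 2 else 0.0


def transit_savings(local_peers: int, total_peers: int) -> float:
    """Share of a torrent's traffic removed from transit by switching
    from the random overlay to the perfect locality policy."""
    return (locality_local_fraction(local_peers)
            - random_overlay_local_fraction(local_peers, total_peers))
```

Under these assumptions a swarm with one local client yields zero savings, which is the sense in which unlocalizable torrents bound the achievable transit reduction.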

    Deep diving into BitTorrent locality


    Analyzing BGP Policies: Methodology and Tool

    The robustness of the Internet relies heavily on the robustness of BGP routing. BGP is the glue that holds the Internet together: it is the common language of the routers that interconnect networks, or Autonomous Systems (ASes). The robustness of BGP, and our ability to manage it effectively, is hampered by limited global knowledge and a lack of coordination between Autonomous Systems. One of the few efforts to develop a globally analyzable and secure Internet is the creation of the Internet Routing Registries (IRRs). IRRs provide a voluntary, detailed repository of BGP policy information. The IRR effort has not reached its full potential for two reasons: a) extracting useful information is far from trivial, and b) the accuracy of the data is uncertain.
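A flavor of the extraction problem the abstract calls non-trivial: IRR objects express policy in RPSL, and even the simplest `import:` attributes need parsing before any analysis. The sketch below handles only the most basic `from ... accept ...` form; real RPSL policies add filters, action clauses, and AS sets, which is exactly why a dedicated methodology and tool are needed.

```python
# Hedged sketch: pulling peering relationships out of RPSL-style
# `import:` lines in an IRR aut-num object. Handles only the simplest
# `from <AS> accept <filter>` form; real RPSL is far richer.
import re

IMPORT_RE = re.compile(
    r"^import:\s*from\s+(AS\d+)\s+accept\s+(\S+)",
    re.IGNORECASE | re.MULTILINE,
)


def extract_imports(autnum_object: str) -> list[tuple[str, str]]:
    """Return (peer_as, accepted_filter) pairs found in an aut-num object."""
    return IMPORT_RE.findall(autnum_object)


# Example aut-num object (hypothetical ASNs from the private range):
example = """\
aut-num: AS65001
import: from AS65002 accept AS65002
import: from AS65003 accept ANY
"""
```

Running `extract_imports(example)` yields the two peer/filter pairs, which could then feed a consistency check between the registered policy and observed BGP behavior.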

    Facilitating Real-Time Graph Mining

    Real-time data processing is increasingly gaining momentum as the preferred method for analytical applications. Many of these applications are built on top of large graphs with hundreds of millions of vertices and edges. A fundamental requirement for real-time processing is the ability to do incremental processing. However, graph algorithms are inherently difficult to compute incrementally due to data dependencies. At the same time, devising incremental graph algorithms is a challenging programming task. This paper introduces GraphInc, a system that builds on top of the Pregel model and provides efficient incremental processing of graphs. Importantly, GraphInc supports incremental computations automatically, hiding the complexity from the programmers. Programmers write graph analytics in the Pregel model without worrying about the continuous nature of the data. GraphInc integrates new data in real time in a transparent manner, by automatically identifying opportunities for incremental processing. We discuss the basic mechanisms of GraphInc and report on an initial evaluation of our approach.
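The core idea behind incremental graph processing can be shown with a minimal example. This is not GraphInc's API or mechanism, just an illustration of the principle: when new edges arrive, update only the vertices they touch instead of recomputing over the whole graph. Degree counting is the simplest metric with this property.

```python
# Minimal illustration of incremental graph computation (not GraphInc's
# API): integrate a batch of new edges by updating only the vertices
# those edges touch, independent of total graph size.
from collections import defaultdict


class IncrementalDegree:
    def __init__(self):
        # degree of every vertex seen so far
        self.degree = defaultdict(int)

    def add_edges(self, edges):
        """Apply a batch of new undirected edges and return the set of
        vertices whose state changed; cost is O(batch), not O(graph)."""
        touched = set()
        for u, v in edges:
            self.degree[u] += 1
            self.degree[v] += 1
            touched.update((u, v))
        return touched
```

For dependency-heavy analytics such as PageRank or connected components, the `touched` set would seed further propagation, which is where systems like GraphInc earn their keep.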